On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no straightforward yes-or-no answer to this question: the answer
depends on complex combinations of critical factors, e.g., request rates, the
overlap of data items across different request flows, and data item
popularities and sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights
into improving the performance of caching systems.
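To make the pooling-versus-separation trade-off concrete, here is a minimal simulation sketch (not taken from the paper): two request flows with assumed Zipf popularity distributions are served either by one pooled LRU cache or by two separated caches of half the size, and the resulting miss ratios are compared. All parameters below are illustrative assumptions.

```python
# Illustrative sketch only; the Zipf exponents, catalog size, cache size,
# and request counts are assumptions, not values from the paper.
import random
from itertools import accumulate
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def request(self, key):
        if key in self.store:
            self.store.move_to_end(key)          # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used

    def miss_ratio(self):
        return self.misses / (self.hits + self.misses)

def zipf_sampler(n_items, alpha, rng):
    # Precompute cumulative Zipf weights so each draw costs O(log n).
    cum = list(accumulate((i + 1) ** -alpha for i in range(n_items)))
    return lambda: rng.choices(range(n_items), cum_weights=cum)[0]

rng = random.Random(0)
flow_a = zipf_sampler(10_000, alpha=1.8, rng=rng)   # assumed popularity laws
flow_b = zipf_sampler(10_000, alpha=1.2, rng=rng)

pooled = LRUCache(1_000)
sep_a, sep_b = LRUCache(500), LRUCache(500)
for _ in range(100_000):
    a, b = ('A', flow_a()), ('B', flow_b())         # disjoint item spaces
    pooled.request(a); pooled.request(b)
    sep_a.request(a);  sep_b.request(b)

print('pooled miss ratio   :', pooled.miss_ratio())
print('separated miss ratio:', (sep_a.misses + sep_b.misses) /
      (sep_a.hits + sep_a.misses + sep_b.hits + sep_b.misses))
```

Varying the exponents, rates, or item overlap in such a simulation is one way to observe empirically the regimes the paper characterizes analytically.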
Optimal Edge Caching For Individualized Demand Dynamics
Ever-growing end-user data demands and simultaneous reductions in memory
costs are fueling edge-caching deployments. Caching at the edge is
substantially different from that at the core and needs to take into account
the nature of individual data demands. For example, an individual user may not
be interested in requesting the same data item again if they have recently
requested it. Such individual dynamics are not apparent in the aggregated data
requests at the core and have not been considered in popularity-driven caching
designs for the core. Hence, these traditional caching policies could induce
significant inefficiencies when applied at the edges. To address this issue, we
develop new edge caching policies optimized for the individual demands that
also leverage overhearing opportunities at the wireless edge. With the
objective of maximizing the hit ratio, the proposed policies will actively
evict the data items that are not likely to be requested in the near future,
and strategically bring them back into the cache through overhearing when they
are likely to be popular again. Both theoretical analysis and numerical
simulations demonstrate that the proposed edge caching policies could
outperform the popularity-driven policies that are optimal at the core.
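As one possible reading of this idea (a hedged sketch, not the paper's actual algorithm), the policy below proactively evicts an item as soon as the local user consumes it and re-admits it via overhearing only after an assumed "refractory" window has passed. The class name, the window length, and the admission rule are all illustrative assumptions.

```python
# Sketch of an individual-demand-aware edge cache; the refractory window
# and admission test are assumptions for illustration, not the paper's policy.
class EdgeCache:
    def __init__(self, capacity, refractory=50):
        self.capacity = capacity
        self.refractory = refractory     # assumed no-re-request window (in requests)
        self.cache = set()
        self.last_request = {}           # item -> time of last local request
        self.clock = 0

    def request(self, item):
        self.clock += 1
        hit = item in self.cache
        self.last_request[item] = self.clock
        # Proactive eviction: the user just consumed the item and is
        # assumed unlikely to re-request it within the refractory window.
        self.cache.discard(item)
        return hit

    def overhear(self, item):
        # Item observed on the shared wireless channel at zero fetch cost.
        # Admit it only if the local user's last request is old enough that
        # a re-request is plausible, and only if space is available.
        last = self.last_request.get(item, -self.refractory)
        if self.clock - last >= self.refractory and len(self.cache) < self.capacity:
            self.cache.add(item)
```

A fuller implementation would also estimate when an evicted item is likely to become popular again before re-admitting it, as the abstract suggests.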
Asymptotic Miss Ratio of LRU Caching with Consistent Hashing
To efficiently scale data caching infrastructure to support emerging big data
applications, many caching systems rely on consistent hashing to group a large
number of servers to form a cooperative cluster. These servers are organized
together according to a random hash function. They jointly provide a unified
but distributed hash table to serve swift and voluminous data item requests.
Unlike the single least-recently-used (LRU) server, which has already been
extensively studied, a cluster consisting of multiple LRU servers has yet to
be theoretically characterized. These servers are not simply added together;
the random hashing complicates their behavior. To this
end, we derive the asymptotic miss ratio of data item requests on an LRU cluster
with consistent hashing. We show that these individual cache spaces on
different servers can be effectively viewed as if they could be pooled together
to form a single virtual LRU cache space parametrized by an appropriate cache
size. This equivalence can be established rigorously under the condition that
the cache sizes of the individual servers are large, a condition that
typically holds for data caching systems. Our theoretical framework provides a
convenient abstraction that can directly apply the results from the simpler
single LRU cache to the more complex LRU cluster with consistent hashing.
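For concreteness, here is a minimal consistent-hashing sketch (illustrative, not the paper's model): each key is hashed onto a ring and routed to the first server clockwise from its hash point, and each server runs an independent LRU cache. The server names, replica count, and capacities are assumptions for the example.

```python
# Illustrative consistent-hashing cluster of LRU servers; names, replica
# count, and capacities are assumptions, not values from the paper.
import bisect
import hashlib
from collections import OrderedDict

def ring_hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class LRUServer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, key):
        if key in self.store:
            self.store.move_to_end(key)          # most recently used
            return True                          # hit
        self.store[key] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        return False                             # miss

class ConsistentHashCluster:
    def __init__(self, server_names, capacity_per_server, replicas=100):
        self.servers = {n: LRUServer(capacity_per_server) for n in server_names}
        # Place several replica points per server on the ring to even out load.
        self.ring = sorted((ring_hash(f'{n}#{i}'), n)
                           for n in server_names for i in range(replicas))
        self.points = [p for p, _ in self.ring]

    def request(self, key):
        # Route to the first server point clockwise from the key's hash.
        idx = bisect.bisect(self.points, ring_hash(key)) % len(self.ring)
        return self.servers[self.ring[idx][1]].request(key)

cluster = ConsistentHashCluster(['s1', 's2', 's3', 's4'], capacity_per_server=250)
cluster.request('item-42')
```

In the spirit of the paper's result, the miss ratio measured on such a cluster should, for large per-server cache sizes, approximate that of a single virtual LRU cache with an appropriately chosen capacity.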